Matrix Exponentiated Gradient Updates for On-line Learning and Bregman Projection

Authors

  • Koji Tsuda
  • Gunnar Rätsch
  • Manfred K. Warmuth
Abstract

We address the problem of learning a symmetric positive definite matrix. The central issue is to design parameter updates that preserve positive definiteness. Our updates are motivated by the von Neumann divergence. Rather than treating the most general case, we focus on two key applications that exemplify our methods: on-line learning with a simple square loss, and finding a symmetric positive definite matrix subject to symmetric linear constraints. The updates generalize the Exponentiated Gradient (EG) update and AdaBoost, respectively: the parameter is now a symmetric positive definite matrix of trace one instead of a probability vector (which in this context is a diagonal positive definite matrix with trace one). The generalized updates use matrix logarithms and exponentials to preserve positive definiteness. Most importantly, we show how the analysis of each algorithm generalizes to the non-diagonal case. We apply both new algorithms, called the Matrix Exponentiated Gradient (MEG) update and DefiniteBoost, to learn a kernel matrix from distance measurements.
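To make the update concrete, here is a minimal NumPy sketch of an MEG step for the on-line square-loss setting described above: predict tr(W_t X_t), take a gradient step in the matrix-logarithm domain, exponentiate back, and renormalize to trace one. The learning rate eta, the synthetic data, and the eigendecomposition helpers sym_logm and sym_expm are illustrative assumptions, not code from the paper.

    import numpy as np

    def sym_logm(W):
        # Matrix logarithm of a symmetric positive definite matrix via eigendecomposition.
        lam, U = np.linalg.eigh(W)
        return U @ np.diag(np.log(lam)) @ U.T

    def sym_expm(M):
        # Matrix exponential of a symmetric matrix via eigendecomposition.
        lam, U = np.linalg.eigh(M)
        return U @ np.diag(np.exp(lam)) @ U.T

    def meg_update(W, X, y, eta):
        # One on-line step: predict tr(W X), step in log-space, exponentiate,
        # then renormalize so the parameter keeps trace one.
        y_hat = np.trace(W @ X)                  # prediction for instance X
        grad = 2.0 * (y_hat - y) * X             # gradient of the square loss (tr(W X) - y)^2
        W_new = sym_expm(sym_logm(W) - eta * grad)
        return W_new / np.trace(W_new)           # trace-one normalization

    # Toy usage with hypothetical synthetic data.
    rng = np.random.default_rng(0)
    d = 3
    W = np.eye(d) / d                            # uniform start: positive definite, trace one
    target = np.diag([0.7, 0.2, 0.1])            # hypothetical ground-truth parameter
    for _ in range(500):
        A = rng.standard_normal((d, d))
        X = (A + A.T) / 2.0                      # symmetric instance matrix
        y = np.trace(target @ X)                 # noiseless label
        W = meg_update(W, X, y, eta=0.05)
    print(np.round(W, 3))                        # W stays symmetric positive definite with trace one

Because the exponential of a symmetric matrix is symmetric positive definite by construction, every iterate remains in the feasible set, which is exactly the property the update is designed around.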


Similar Articles


Analysis and Generalizations of the Linearized Bregman Method

This paper analyzes and improves the linearized Bregman method for solving the basis pursuit and related sparse optimization problems. The analysis shows that the linearized Bregman method has the exact regularization property; namely, it converges to an exact solution of the basis pursuit problem whenever its smoothing parameter α is greater than a certain value. The analysis is based on showing ...

Extended Formulations for Online Linear Bandit Optimization

On-line linear optimization on combinatorial action sets (d-dimensional actions) with bandit feedback is known to have complexity of the order of the dimension of the problem. The exponentially weighted strategy achieves the best known regret bound, which is of the order of d√n (where d is the dimension of the problem and n is the time horizon). However, such strategies are provably suboptimal or ...

A New Perspective of Proximal Gradient Algorithms

We provide a new perspective for understanding proximal gradient algorithms. We show that both the proximal gradient algorithm (PGA) and the Bregman proximal gradient algorithm (BPGA) can be viewed as a generalized proximal point algorithm (GPPA), based on which more accurate convergence rates of PGA and BPGA are obtained directly. Furthermore, based on the GPPA framework, we incorporate the back-tracking line s...

A weighted denoising method based on Bregman iterative regularization and gradient projection algorithms

A weighted Bregman-Gradient Projection denoising method, based on the Bregman iterative regularization (BIR) method and Chambolle's Gradient Projection method (or dual denoising method), is established. Some applications to image denoising on a 1-dimensional curve, a 2-dimensional gray image, and a 3-dimensional color image are presented. Compared with the main results of the literature, the present...


Journal:
  • Journal of Machine Learning Research

Volume 6, Issue -

Pages -

Publication date: 2004